screen reader
HACI: A Haptic-Audio Code Interface to Improve Educational Outcomes for Visually Impaired Introductory Programming Students
This thesis introduces the Haptic-Audio Code Interface (HACI), an educational tool designed to enhance programming education for visually impaired (VI) students by integrating haptic and audio feedback to compensate for the absence of visual cues. HACI consists of a non-resource-intensive web application supporting JavaScript program development, execution, and debugging, connected via a cable to an Arduino-powered glove with six integrated haptic motors to provide physical feedback to VI programmers. Motivated by the need to provide equitable educational opportunities in computer science, HACI aims to improve non-visual code navigation, comprehension, summarizing, editing, and debugging for students with visual impairments while minimizing cognitive load. This work details HACI's design principles, technical implementation, and a preliminary evaluation through a pilot study conducted with undergraduate Computer Science students. Findings indicate that HACI aids in the non-visual navigation and understanding of programming constructs, although challenges remain in refining feedback mechanisms to ensure consistency and reliability, as well as supplementing the current functionality with a more feature-rich and customizable accessible learning experience that will allow visually impaired students to fully utilize interleaved haptic and audio feedback. The study underscores the transformative potential of haptic and audio feedback in educational practices for the visually impaired, setting a foundation for future research and development in accessible programming education. This thesis contributes to the field of accessible technology by demonstrating how tactile and auditory feedback can be effectively integrated into educational tools, thereby broadening accessibility in STEM education.
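The core idea of encoding program structure as motor pulses can be sketched as follows. This is a hypothetical illustration only: the motor indices, construct names, and patterns below are assumptions for the sketch, not HACI's actual encoding.

```python
# Hypothetical sketch: mapping program constructs to a six-motor haptic
# glove, in the spirit of HACI. The patterns below are invented for
# illustration, not taken from HACI's design.

# Each construct maps to a tuple of motor indices (0-5) pulsed in order.
CONSTRUCT_PATTERNS = {
    "loop_start":   (0, 1),        # sweep across the first two motors
    "loop_end":     (1, 0),        # reverse sweep signals the closing brace
    "conditional":  (2,),          # single pulse on motor 2
    "function_def": (3, 4, 5),     # sweep across the last three motors
    "error":        (0, 5, 0, 5),  # alternating pulse for attention
}

def haptic_sequence(tokens):
    """Flatten a token stream into the ordered list of motor pulses
    that would be sent to the glove's controller."""
    pulses = []
    for token in tokens:
        pulses.extend(CONSTRUCT_PATTERNS.get(token, ()))
    return pulses

# Example: a function containing a loop with a conditional inside it.
program = ["function_def", "loop_start", "conditional", "loop_end"]
print(haptic_sequence(program))  # motor indices in firing order
```

In a real system these pulse lists would be streamed over the serial cable to the Arduino controller, interleaved with the audio feedback the thesis describes.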
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > Switzerland (0.04)
- Instructional Material (1.00)
- Research Report > New Finding (0.92)
- Research Report > Experimental Study (0.67)
- Health & Medicine > Therapeutic Area (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
- Education > Educational Setting > K-12 Education (0.93)
Not All Visitors are Bilingual: A Measurement Study of the Multilingual Web from an Accessibility Perspective
Bhuiyan, Masudul Hasan Masud, Varvello, Matteo, Zaki, Yasir, Staicu, Cristian-Alexandru
English is the predominant language on the web, powering nearly half of the world's top ten million websites. Support for multilingual content is nevertheless growing, with many websites increasingly combining English with regional or native languages in both visible content and hidden metadata. This multilingualism introduces significant barriers for users with visual impairments, as assistive technologies like screen readers frequently lack robust support for non-Latin scripts and misrender or mispronounce non-English text, compounding accessibility challenges across diverse linguistic contexts. Yet, large-scale studies of this issue have been limited by the lack of comprehensive datasets on multilingual web content. To address this gap, we introduce LangCrUX, the first large-scale dataset of 120,000 popular websites across 12 languages that primarily use non-Latin scripts. Leveraging this dataset, we conduct a systematic analysis of multilingual web accessibility and uncover widespread neglect of accessibility hints. We find that these hints often fail to reflect the language diversity of visible content, reducing the effectiveness of screen readers and limiting web accessibility. We finally propose Kizuki, a language-aware automated accessibility testing extension to account for the limited utility of language-inconsistent accessibility hints.
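The kind of language-consistency check a tool like Kizuki performs can be sketched in a few lines: compare the script of each accessibility hint against the page's declared language. This is a simplified illustration, not Kizuki's implementation; the script classifier and the language sets are assumptions.

```python
# Hypothetical sketch of a language-consistency check for accessibility
# hints (alt text, aria-labels). The crude majority-vote script detector
# and the Latin-script language set are simplifications for illustration.
import unicodedata

def dominant_script(text):
    """Classify text as 'Latin', 'non-Latin', or 'neutral' by majority
    vote over its alphabetic characters."""
    latin = other = 0
    for ch in text:
        if not ch.isalpha():
            continue
        if "LATIN" in unicodedata.name(ch, ""):
            latin += 1
        else:
            other += 1
    if latin == other == 0:
        return "neutral"
    return "Latin" if latin >= other else "non-Latin"

def hint_mismatches(declared_lang, hints):
    """Return hints whose script disagrees with the declared page
    language (non-Latin-script languages expected to use non-Latin)."""
    expected = "Latin" if declared_lang in {"en", "de", "fr"} else "non-Latin"
    return [h for h in hints
            if dominant_script(h) not in ("neutral", expected)]

# A Hindi ("hi") page whose aria-labels are in English would be flagged:
print(hint_mismatches("hi", ["मेनू खोलें", "Open menu", "42"]))
```

A production checker would of course use a proper language identifier and the page's per-element `lang` attributes rather than a binary script vote.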
- Asia > Russia (0.29)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Asia > India (0.07)
- (23 more...)
Hear Your Code Fail, Voice-Assisted Debugging for Python
Amiri, Sayed Mahbub Hasan, Islam, Md. Mainul, Hossen, Mohammad Shakhawat, Amiri, Sayed Majhab Hasan, Mamun, Mohammad Shawkat Ali, Kabir, Sk. Humaun, Akter, Naznin
This staggering performance drain translates to roughly $61 billion in yearly financial losses throughout the worldwide software market, as quantified by the Standish Group's 2023 analysis of development workflows. The core inefficiency stems from traditional debugging's visual-only paradigm, where developers must manually parse dense, technical stack traces while mentally reconstructing error context, a process requiring intense cognitive focus that fragments attention between code logic and exception diagnostics. Neuroergonomic research from MIT's Human-Computer Interaction Lab reveals this context-switching triggers measurable cognitive overload, increasing prefrontal cortex activation by 60% compared to focused coding tasks, ultimately leading to mental fatigue that compounds debugging errors. The accessibility limitations of conventional debugging tools create additional barriers for approximately 12.5% of professional developers with visual impairments (World Health Organization, 2024), who struggle with screen readers that poorly interpret technical tracebacks. As Sarah Parker, a blind Python developer at Microsoft, testified during the 2023 Accessible Tech Symposium: "NVDA reads exception blocks as disconnected fragments. I spend more time reassembling error narratives than solving actual problems."
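The basic move behind voice-assisted debugging, condensing a multi-line traceback into one linear sentence a screen reader or TTS engine can speak in a single breath, can be sketched with the standard library alone. This is a minimal illustration of the idea, not the paper's tool; the summary wording is an assumption, and a real assistant would hand the string to a speech engine.

```python
# Minimal sketch: condense a Python traceback into one spoken-style
# sentence instead of the dense dump a screen reader would read
# fragment by fragment. A real tool would pass the result to a TTS
# engine; here we just print it.
import traceback

def spoken_summary(exc):
    """Turn an exception into a short, linear sentence suitable for
    text-to-speech: type, innermost location, then the message."""
    tb = traceback.extract_tb(exc.__traceback__)
    frame = tb[-1] if tb else None
    where = (f" in function {frame.name}, line {frame.lineno}"
             if frame else "")
    return f"{type(exc).__name__}{where}: {exc}"

def buggy():
    return {}["missing"]  # deliberately raises KeyError

try:
    buggy()
except KeyError as e:
    print(spoken_summary(e))
```

Instead of several lines of `Traceback (most recent call last): ...`, the listener hears one sentence naming the error type, the function, the line, and the message, in that order.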
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.07)
- North America > United States > Washington > King County > Seattle (0.04)
- (2 more...)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Education > Educational Setting (0.93)
Fast, Not Fancy: Rethinking G2P with Rich Data and Rule-Based Models
Qharabagh, Mahta Fetrat, Dehghanian, Zahra, Rabiee, Hamid R.
Homograph disambiguation remains a significant challenge in grapheme-to-phoneme (G2P) conversion, especially for low-resource languages. This challenge is twofold: (1) creating balanced and comprehensive homograph datasets is labor-intensive and costly, and (2) specific disambiguation strategies introduce additional latency, making them unsuitable for real-time applications such as screen readers and other accessibility tools. In this paper, we address both issues. First, we propose a semi-automated pipeline for constructing homograph-focused datasets, introduce the HomoRich dataset generated through this pipeline, and demonstrate its effectiveness by applying it to enhance a state-of-the-art deep learning-based G2P system for Persian. Second, we advocate for a paradigm shift: utilizing rich offline datasets to inform the development of fast, rule-based methods suitable for latency-sensitive accessibility applications like screen readers. To this end, we improve one of the most well-known rule-based G2P systems, eSpeak, into a fast homograph-aware version, HomoFast eSpeak. Our results show an approximate 30% improvement in homograph disambiguation accuracy for the deep learning-based and eSpeak systems.
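The appeal of rule-based disambiguation for latency-sensitive tools is that it reduces to dictionary lookups and cheap context tests. The sketch below illustrates the general shape with English examples; the lexicon, context cues, and phoneme strings are invented for illustration and are not eSpeak's or HomoFast eSpeak's actual rules.

```python
# Illustrative sketch of rule-based homograph disambiguation in the
# spirit of a homograph-aware eSpeak. The entries below are invented
# English examples, not real eSpeak rules.

# homograph -> list of (context cue words, phonemes); the last entry
# with an empty cue set is the default pronunciation.
HOMOGRAPHS = {
    "read": [({"will", "to", "can"}, "r iy d"),        # present tense
             (set(),                 "r eh d")],       # past-tense default
    "lead": [({"pencil", "pipe", "paint"}, "l eh d"),  # the metal
             (set(),                       "l iy d")], # the verb default
}

def g2p_word(word, prev_word):
    """Pick a pronunciation using the preceding word as the only
    context cue; fall back to the default (last) entry."""
    rules = HOMOGRAPHS.get(word)
    if rules is None:
        return None  # a real system would fall through to letter rules
    for cues, phonemes in rules:
        if prev_word in cues:
            return phonemes
    return rules[-1][1]

print(g2p_word("read", "will"))  # present-tense pronunciation
print(g2p_word("read", "had"))   # falls back to the past-tense default
```

Because every step is a constant-time lookup, this style of rule adds essentially no latency to the synthesis pipeline, which is exactly the property the paper argues matters for screen readers.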
- Asia > Middle East > Iran (0.04)
- North America > United States > Illinois (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- (2 more...)
- Law (1.00)
- Information Technology > Security & Privacy (0.93)
- Information Technology > Artificial Intelligence > Speech (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
The Semantic Reader Project
The exponential growth in the rate of scientific publication [4] and increasing interdisciplinary nature of scientific progress [27] makes it increasingly hard for scholars to keep up with the latest developments. Academic search engines, such as Google Scholar and Semantic Scholar, help scholars discover research papers. Techniques such as automated summarization help scholars triage research papers [5]. But when it comes to actually reading research papers, the process, often based on a static PDF format, has remained largely unchanged for many decades. This is a problem because digesting technical research papers in their conventional formats is difficult [2].
The tech industry still has a long way to go when it comes to accessibility
As many in the accessibility community will tell you, inclusive design isn't an endeavor that's "one and done." It's a continuous, ongoing effort to ensure that as new products and services are made, people with different needs or disabilities aren't left out. Over the last three years, Engadget has produced a report, in addition to our regular coverage, that looks back on the developments in the tech industry that impact the accessibility community, focusing on the largest companies like Apple, Amazon, Google, Microsoft and Meta. Of course, there are plenty of other big companies to consider, like Uber, Airbnb, Netflix and more. But the six we've selected have an outsized impact and influence on the industry.
- Information Technology > Services (0.68)
- Media > Film (0.49)
AI empowering the Visually Impaired
Some see disability as a unique individuality, and why not? That individuality deserves to be supported and explored, so here are a few applications and technologies developed for people with visual and sight-related needs. Microsoft has released a new free app for Apple's iPhone, called Seeing AI, and it is generating a lot of interest in a short period of time. In less than a week, techies were giving glowing reviews to the app and podcasters were creating tutorials, which is all great news for consumers looking for an introduction to using the app. So what are the various ways that AI is empowering the visually impaired?
- Information Technology > Communications > Mobile (0.91)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.35)
7 Ways Image Recognition Can Help Impaired Vision! Here's How
With the help of technology in this visual age, images and videos are becoming more and more prevalent in daily life. In its early days, social media was predominantly text-based, but technology has now started to adapt to the needs of people with impaired vision too. Thanks to thoughtful design, modern technologies are making navigating social media a far better experience for the visually impaired as well. Let's look at one such technology, image recognition, which has made life easier for people with impaired vision. Here are the seven ways it can help.
Making data visualizations more accessible
In the early days of the Covid-19 pandemic, the Centers for Disease Control and Prevention produced a simple chart to illustrate how measures like mask wearing and social distancing could "flatten the curve" and reduce the peak of infections. The chart was amplified by news sites and shared on social media platforms, but it often lacked a corresponding text description to make it accessible for blind individuals who use a screen reader to navigate the web, shutting out many of the 253 million people worldwide who have visual disabilities. This alternative text is often missing from online charts, and even when it is included, it is frequently uninformative or even incorrect, according to qualitative data gathered by scientists at MIT. These researchers conducted a study with blind and sighted readers to determine which text is useful to include in a chart description, which text is not, and why. Ultimately, they found that captions for blind readers should focus on the overall trends and statistics in the chart, not its design elements or higher-level insights.
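The study's recommendation, describe trends and statistics rather than design elements, translates directly into code. The sketch below is a hypothetical illustration of that principle: the function name and the wording template are assumptions, not the researchers' tool.

```python
# Hypothetical sketch applying the MIT finding: generate alt text that
# reports a chart's overall trend and summary statistics instead of its
# visual design (colors, axes, chart type). Wording is an assumption.

def chart_alt_text(title, x_label, y_label, values):
    """Describe a single data series by its trend and extremes, the
    content the study found most useful for blind readers."""
    direction = ("rising" if values[-1] > values[0]
                 else "falling" if values[-1] < values[0]
                 else "flat")
    return (f"{title}: {y_label} over {x_label}, {direction} "
            f"from {values[0]} to {values[-1]}; "
            f"peak {max(values)}, minimum {min(values)}.")

daily_cases = [10, 40, 95, 60, 30]
print(chart_alt_text("Infection curve", "time", "daily cases", daily_cases))
```

Note what the description deliberately omits: line color, axis styling, and interpretive claims like "the intervention worked," matching the study's finding that design details and higher-level insights were less useful to blind readers than the trends and statistics themselves.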